Results 1 - 20 of 10,359
1.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Subjects
Auditory Cortex, Sound Localization, Visual Cortex, Visual Perception/physiology, Auditory Cortex/physiology, Neurons/physiology, Visual Cortex/physiology, Photic Stimulation/methods, Acoustic Stimulation/methods
2.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579741

ABSTRACT

Objective. The auditory steady-state response (ASSR) allows estimation of hearing thresholds. The ASSR can be estimated from electroencephalography (EEG) recordings from electrodes positioned both on the scalp and within the ear (ear-EEG). Ear-EEG can potentially be integrated into hearing aids, which would enable automatic fitting of the hearing device in daily life. The conventional stimuli for ASSR-based hearing assessment, such as pure tones and chirps, are monotonous and tiresome, making them inconvenient for repeated use in everyday situations. In this study we investigate the use of natural speech sounds for ASSR estimation. Approach. EEG was recorded from 22 normal-hearing subjects from both scalp and ear electrodes. Subjects were stimulated monaurally with 180 min of speech stimulus modified by applying a 40 Hz amplitude modulation (AM) to an octave-frequency sub-band centered at 1 kHz. Each 50 ms sub-interval in the AM sub-band was scaled to match one of 10 pre-defined levels (0-45 dB sensation level, 5 dB steps). The apparent latency of the ASSR was estimated as the lag of the maximum average cross-correlation between the envelope of the AM sub-band and the recorded EEG, and was used to align the EEG signal with the audio signal. The EEG was then split into sub-epochs of 50 ms length and sorted according to the stimulation level. The ASSR was estimated for each level for both scalp- and ear-EEG. Main results. Significant ASSRs with increasing amplitude as a function of presentation level were recorded from both scalp and ear electrode configurations. Significance. Utilizing natural sounds in ASSR estimation offers the potential for electrophysiological hearing assessment that is more comfortable and less fatiguing than existing ASSR methods. Combined with ear-EEG, this approach may allow convenient hearing threshold estimation in everyday life, utilizing ambient sounds. Additionally, it may facilitate both initial fitting and subsequent adjustments of hearing aids outside of clinical settings.
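The alignment step above (choosing the lag that maximizes the cross-correlation between the AM envelope and the EEG) can be sketched with synthetic signals; this is an illustrative reconstruction, not the study's code, and the function name, sampling rate, and noise level are assumptions:

```python
import numpy as np

def apparent_latency(envelope, eeg, fs, max_lag_s=0.2):
    """Apparent latency: the lag (in s) maximizing the cross-correlation
    between the stimulus AM envelope and the recorded EEG."""
    lags = np.arange(int(max_lag_s * fs) + 1)
    corrs = [np.corrcoef(envelope[:len(envelope) - lag],
                         eeg[lag:len(envelope)])[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))] / fs

# Synthetic check: the "EEG" is the envelope delayed by 40 ms plus noise.
fs = 1000
rng = np.random.default_rng(0)
env = np.convolve(rng.standard_normal(10 * fs),
                  np.ones(50) / 50, mode="same")   # smooth random envelope
delay = int(0.040 * fs)
eeg = np.concatenate([np.zeros(delay), env[:-delay]])
eeg = eeg + 0.05 * rng.standard_normal(len(eeg))
lat = apparent_latency(env, eeg, fs)               # close to 0.040 s
```

Once the latency is known, the EEG can be shifted by that lag before being cut into the 50 ms sub-epochs that are sorted by stimulation level.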


Subjects
Hearing, Sound, Humans, Acoustic Stimulation/methods, Auditory Threshold/physiology, Electroencephalography/methods
3.
Hum Brain Mapp ; 45(4): e26653, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488460

ABSTRACT

Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with the facial articulation of /ga/ (i.e., a viseme) is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk and the veridical audiovisual congruent speech percepts result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may only increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk compared to congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, typically involved in cognitive control processes. Crucially, in line with Bayesian theories these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
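The behavioral uncertainty measure used here, response entropy, is the Shannon entropy of an observer's distribution of categorization responses for a given stimulus; a minimal sketch with made-up response counts (the data below are illustrative only):

```python
import numpy as np
from collections import Counter

def response_entropy(responses):
    """Shannon entropy (bits) of a set of categorical responses, e.g.
    the syllables an observer reported across repetitions of a stimulus."""
    counts = np.array(list(Counter(responses).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# McGurk trials typically split across percepts, giving higher entropy
# than congruent trials (these response counts are invented).
mcgurk = ["da"] * 6 + ["ba"] * 3 + ["ga"] * 1
congruent = ["ba"] * 9 + ["da"] * 1
```

A perfectly consistent observer yields zero entropy; the more evenly responses spread across percepts, the higher the entropy, which is what makes it a convenient single-number index of perceptual uncertainty.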


Subjects
Illusions, Speech Perception, Humans, Male, Female, Auditory Perception/physiology, Speech/physiology, Illusions/physiology, Visual Perception/physiology, Bayes Theorem, Uncertainty, Speech Perception/physiology, Acoustic Stimulation/methods, Photic Stimulation/methods
4.
Fa Yi Xue Za Zhi ; 40(1): 15-19, 2024 Feb 25.
Article in English, Chinese | MEDLINE | ID: mdl-38500456

ABSTRACT

OBJECTIVES: To study the application of CE-Chirp in the evaluation of hearing impairment in forensic medicine by testing the auditory brainstem response (ABR) in adults with CE-Chirp stimuli, and to analyze the relationship between the V-wave response threshold of the CE-Chirp ABR test and the pure tone hearing threshold. METHODS: Subjects (aged 20-77, 100 ears in total) who underwent the CE-Chirp ABR test in Changzhou De'an Hospital from January 2018 to June 2019 were selected to obtain the V-wave response threshold, and pure tone air conduction hearing threshold tests were conducted at 0.5, 1.0, 2.0 and 4.0 kHz to obtain pure tone hearing thresholds. The differences between the average pure tone hearing threshold and the V-wave response threshold were compared across hearing levels and age groups and tested for statistical significance. The correlation and the differences between the two tests at each frequency were analyzed for all subjects. A linear regression equation for estimating the pure tone hearing threshold from the CE-Chirp ABR V-wave response threshold was established, and the feasibility of the equation was tested. RESULTS: The difference between the CE-Chirp ABR response threshold and the pure tone hearing threshold did not differ significantly across hearing level groups or age groups (P>0.05). The adult CE-Chirp ABR V-wave response threshold correlated well with the pure tone hearing threshold (P<0.05), and linear regression analysis showed a significant linear relationship between the two (P<0.05). CONCLUSIONS: The CE-Chirp ABR V-wave response threshold can be used to estimate subjects' pure tone hearing threshold under certain conditions, and can serve as an audiological test method for forensic hearing impairment assessment.
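The threshold-estimation approach (a least-squares line relating the two tests) can be sketched as follows; the paired thresholds below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

# Hypothetical paired thresholds (dB) for illustration only: CE-Chirp ABR
# V-wave response thresholds and the corresponding pure tone thresholds.
abr = np.array([20, 25, 30, 40, 45, 55, 60, 70], dtype=float)
pta = np.array([15, 22, 27, 35, 42, 50, 57, 66], dtype=float)

slope, intercept = np.polyfit(abr, pta, 1)   # least-squares regression line
r = np.corrcoef(abr, pta)[0, 1]              # correlation between the tests

def estimate_pta(abr_threshold):
    """Estimate the pure tone threshold from an ABR V-wave threshold."""
    return slope * abr_threshold + intercept
```

In practice the regression coefficients would be fitted on the full clinical sample, and the reported feasibility check amounts to verifying that predictions for held-out subjects fall within an acceptable error of their measured pure tone thresholds.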


Subjects
Hearing Loss, Hearing, Adult, Humans, Acoustic Stimulation/methods, Auditory Threshold/physiology, Hearing/physiology, Hearing Loss/diagnosis, Pure-Tone Audiometry/methods, Brain Stem Auditory Evoked Potentials/physiology
5.
PLoS Biol ; 22(3): e3002534, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38466713

ABSTRACT

Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution), while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.


Subjects
Auditory Cortex, Speech Perception, Humans, Speech Perception/physiology, Speech, Feedback, Electroencephalography/methods, Auditory Cortex/physiology, Acoustic Stimulation/methods
6.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38508715

ABSTRACT

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked but feedback (FB) type for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, auditory evoked response showed peaks at 37 and 90 ms and visual evoked response at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.


Subjects
Acoustic Stimulation, Auditory Cortex, Visual Evoked Potentials, Magnetoencephalography, Photic Stimulation, Humans, Auditory Cortex/physiology, Magnetoencephalography/methods, Female, Male, Adult, Photic Stimulation/methods, Visual Evoked Potentials/physiology, Acoustic Stimulation/methods, Neurological Models, Young Adult, Auditory Evoked Potentials/physiology, Neurons/physiology, Brain Mapping/methods
7.
Dev Neurobiol ; 84(2): 47-58, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38466218

ABSTRACT

In sexually dimorphic zebra finches (Taeniopygia guttata), only males learn to sing their father's song, whereas females learn to recognize the songs of their father or mate but cannot sing themselves. Memory of learned songs is behaviorally expressed in females by preferring familiar songs over unfamiliar ones. Auditory association regions such as the caudomedial mesopallium (CMM; or caudal mesopallium) have been shown to be key nodes in a network that supports preferences for learned songs in adult females. However, much less is known about how song preferences develop during the sensitive period of learning in juvenile female zebra finches. In this study, we used blood-oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) to trace the development of a memory-based preference for the father's song in female zebra finches. Using BOLD fMRI, we found that neural selectivity for the father's song was localized to the thalamus (dorsolateral nucleus of the medial thalamus, part of the anterior forebrain pathway, AFP) and to CMM only in adult female zebra finches that preferred learned song over novel conspecific song. These brain regions also showed a selective response in juvenile female zebra finches, although activation was less prominent. These data reveal that neural responses in CMM, and perhaps also in the AFP, are shaped during development to support behavioral preferences for learned songs.


Subjects
Finches, Animal Vocalization, Male, Animals, Female, Animal Vocalization/physiology, alpha-Fetoproteins/metabolism, Finches/metabolism, Acoustic Stimulation/methods, Auditory Perception/physiology, Prosencephalon/metabolism, Magnetic Resonance Imaging/methods
8.
PLoS One ; 19(3): e0299911, 2024.
Article in English | MEDLINE | ID: mdl-38451925

ABSTRACT

INTRODUCTION: The functional evaluation of auditory-nerve activity in spontaneous conditions has remained elusive in humans. In animals, the frequency analysis of the round-window electrical noise recorded by means of electrocochleography yields a frequency peak at around 900 to 1000 Hz, which has been proposed to reflect auditory-nerve spontaneous activity. Here, we studied the spectral components of the electrical noise obtained from cochlear implant electrocochleography in humans. METHODS: We recruited adult cochlear implant recipients from the Clinical Hospital of the Universidad de Chile, between the years 2021 and 2022. We used the AIM System from Advanced Bionics® to obtain single-trial electrocochleography signals from the most apical electrode in cochlear implant users. We performed a protocol to study spontaneous activity and auditory responses to 0.5 and 2 kHz tones. RESULTS: Twenty subjects (12 females), with a mean age of 57.9 ± 12.6 years (range 36 to 78 years), were recruited. The electrical noise of the single-trial cochlear implant electrocochleography signal yielded a reliable peak at 3.1 kHz in 55% of the cases (11 out of 20 subjects), while an oscillatory pattern that masked the spectrum was observed in seven cases. In the other two cases, the single-trial noise was not classifiable. Auditory stimulation at 0.5 kHz and 2.0 kHz did not change the amplitude of the 3.1 kHz frequency peak. CONCLUSION: We found two main types of noise patterns in the frequency analysis of the single-trial noise from cochlear implant electrocochleography, including a peak at 3.1 kHz that might reflect auditory-nerve spontaneous activity, while the oscillatory pattern probably corresponds to an artifact.


Subjects
Cochlear Implantation, Cochlear Implants, Adult, Aged, Female, Humans, Middle Aged, Acoustic Stimulation/methods, Evoked Response Audiometry/methods, Cochlear Nerve/physiology, Noise, Male
9.
Sci Rep ; 14(1): 5900, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38467716

ABSTRACT

Idiopathic tinnitus is a common and complex disorder with no established cure. Cochleural Alternating Acoustic Beam Therapy (CAABT) is a personalized sound therapy designed to target specific tinnitus frequencies and intervene effectively in tinnitus according to clinical tinnitus assessment. This study aimed to compare the effectiveness of CAABT and Traditional Sound Therapy (TST) in managing chronic idiopathic tinnitus. This was a randomized, double-blind, parallel-group, single-center prospective study. Sixty adult patients with tinnitus were recruited and randomly assigned to the CAABT or TST group in a 1:1 ratio using computer-generated randomization. The treatment lasted 12 weeks, and participants underwent assessments using the Tinnitus Handicap Inventory (THI), visual analog scale (VAS), tinnitus loudness measurements, and resting-state functional magnetic resonance imaging (rs-fMRI). Both groups showed significant reductions in THI scores, VAS scores, and tinnitus loudness after treatment. However, CAABT was superior to TST in the THI Functional (p = 0.018), THI Emotional (p = 0.015), THI Catastrophic (p = 0.022), and THI total scores (p = 0.005), as well as the VAS score (p = 0.022). More interestingly, CAABT was also superior to TST in the changes of THI scores and VAS scores from baseline. The rs-fMRI results showed significant changes in the precuneus before and after treatment in both groups. Moreover, the CAABT group showed more changes in brain regions compared to the TST group. No side effects were observed. These findings suggest that CAABT may be a promising treatment option for chronic idiopathic tinnitus, providing significant improvements in tinnitus-related symptoms and brain activity. Trial registration: ClinicalTrials.gov: NCT02774122.


Subjects
Tinnitus, Adult, Humans, Tinnitus/diagnostic imaging, Tinnitus/therapy, Prospective Studies, Sound, Acoustic Stimulation/methods, Acoustics, Treatment Outcome
10.
Commun Biol ; 7(1): 291, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459110

ABSTRACT

When engaged in a conversation, one receives auditory information from the other's speech but also from one's own speech. However, the latter is processed differently, an effect called speech-induced suppression (SIS). Here, we studied the brain representation of acoustic properties of speech in natural unscripted dialogues, using electroencephalography (EEG) and high-quality speech recordings from both participants. Using encoding techniques, we reproduced a broad range of previous findings on listening to another's speech, achieving even better performance when predicting the EEG signal in this complex scenario. Furthermore, we found no response when listening to oneself, across different acoustic features (spectrogram, envelope, etc.) and frequency bands, evidencing a strong SIS effect. The present work shows that this mechanism is present, and even stronger, during natural dialogues. Moreover, the methodology presented here opens the possibility of a deeper understanding of the related mechanisms in a wider range of contexts.
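The encoding technique referred to above is commonly implemented as a time-lagged ridge regression (a temporal response function, TRF) predicting the EEG from the speech envelope; the following is a self-contained sketch on synthetic data, with all parameter values assumed rather than taken from the study:

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Design matrix whose columns are time-lagged copies of the stimulus."""
    X = np.zeros((len(stim), n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:len(stim) - k]
    return X

def fit_trf(stim, eeg, n_lags, alpha=1.0):
    """Ridge-regression encoding model (temporal response function)."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Synthetic data: the "EEG" is the envelope filtered by a known kernel.
rng = np.random.default_rng(0)
env = rng.standard_normal(5000)
true_w = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
eeg = np.convolve(env, true_w)[:len(env)] + 0.1 * rng.standard_normal(5000)

w = fit_trf(env, eeg, n_lags=8)
pred = lagged_design(env, 8) @ w
score = np.corrcoef(pred, eeg)[0, 1]   # encoding (prediction) accuracy
```

A strong SIS effect would show up in this framework as near-zero prediction accuracy when the model is fitted on segments where the participant is speaking, compared with high accuracy on segments where they are listening.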


Subjects
Electroencephalography, Speech, Humans, Speech/physiology, Acoustic Stimulation/methods, Electroencephalography/methods, Brain, Brain Mapping/methods
11.
Autism Res ; 17(2): 280-310, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38334251

ABSTRACT

Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex, natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as an altered disengagement of Default Mode Network processes during observation of the language stimulus across conditions.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Speech Perception, Adult, Child, Humans, Speech Perception/physiology, Narration, Visual Perception/physiology, Autism Spectrum Disorder/diagnostic imaging, Magnetic Resonance Imaging, Auditory Perception/physiology, Acoustic Stimulation/methods, Photic Stimulation/methods
12.
Cell Rep ; 43(3): 113864, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38421870

ABSTRACT

The neural mechanisms underlying novelty detection are not well understood, especially in relation to behavior. Here, we present single-unit responses from the primary auditory cortex (A1) from two monkeys trained to detect deviant tones amid repetitive ones. Results show that monkeys can detect deviant sounds, and there is a strong correlation between late neuronal responses (250-350 ms after deviant onset) and the monkeys' perceptual decisions. The magnitude and timing of both neuronal and behavioral responses are increased by larger frequency differences between the deviant and standard tones and by increasing the number of standard tones preceding the deviant. This suggests that A1 neurons encode novelty detection in behaving monkeys, influenced by stimulus relevance and expectations. This study provides evidence supporting aspects of predictive coding in the sensory cortex.


Subjects
Auditory Cortex, Auditory Evoked Potentials, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Auditory Cortex/physiology, Neurons/physiology
13.
Neuropsychologia ; 196: 108822, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38342179

ABSTRACT

Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in noise-masker or noise in clicks-masker), as compared to when target and masker used the same type of sound (e.g. clicks in click-, or noise in noise-masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks or noise may differ in their sensitivity to binaural cues. We observed the same pattern of results for echolocation experts, and blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
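The adaptive staircase used to adjust SNRs can be sketched as a standard 2-down/1-up rule; the abstract does not specify the exact rule, so the rule, step size, and simulated listener below are assumptions for illustration:

```python
import math
import random

def staircase_snr(respond, start_snr=0.0, step=2.0, n_reversals=10):
    """2-down/1-up adaptive staircase for SNR: lower the SNR after two
    consecutive correct responses, raise it after any error. Converges
    near the 70.7%-correct point; threshold = mean of reversal SNRs."""
    snr, streak, direction, reversals = start_snr, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(snr):                      # correct trial
            streak += 1
            if streak == 2:                   # two in a row -> make it harder
                streak = 0
                if direction == +1:
                    reversals.append(snr)     # up-to-down reversal
                direction = -1
                snr -= step
        else:                                 # error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(snr)         # down-to-up reversal
            direction = +1
            snr += step
    return sum(reversals) / len(reversals)

# Simulated listener with a logistic psychometric function and a
# "true" threshold of -6 dB SNR (all values are illustrative only).
def listener(snr, threshold=-6.0, slope=0.8):
    p = 0.5 + 0.5 / (1.0 + math.exp(-(snr - threshold) * slope))
    return random.random() < p

random.seed(3)
estimate = staircase_snr(listener)            # hovers near -6 dB
```

With more reversals the estimate stabilizes; practical implementations often also shrink the step size after the first few reversals.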


Subjects
Sound Localization, Animals, Humans, Sound Localization/physiology, Blindness, Noise, Acoustics, Cues (Psychology), Perceptual Masking, Acoustic Stimulation/methods
14.
Acta Psychol (Amst) ; 244: 104195, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38412710

ABSTRACT

This study adopts a cross-linguistic perspective and investigates how musical expertise affects the perception of duration and pitch in language. Native speakers of Chinese (N = 44) and Estonian (N = 46), each group subdivided into musicians and non-musicians, participated in a mismatch negativity (MMN) experiment where they passively listened to both Chinese and Estonian stimuli, followed by a behavioral experiment where they attentively discriminated the stimuli in the non-native language (i.e., Chinese to Estonian participants and Estonian to Chinese participants). In both experiments, stimuli of duration change, pitch change, and duration plus pitch change were discriminated. We found higher behavioral sensitivity among Chinese musicians than non-musicians in perceiving the duration change in Estonian and higher behavioral sensitivity among Estonian musicians than non-musicians in perceiving all types of changes in Chinese, but no corresponding effect was found in the MMN results, which suggests a more salient effect of musical expertise on foreign language processing when attention is required. Secondly, Chinese musicians did not outperform non-musicians in attentively discriminating the pitch-related stimuli in Estonian, suggesting that musical expertise can be overridden by tonal language experience when perceiving foreign linguistic pitch, especially when an attentive discrimination task is administered. Thirdly, we found larger MMN among Chinese and Estonian musicians than their non-musician counterparts in perceiving the largest deviant (i.e., duration plus pitch) in their native language. Taken together, our results demonstrate a positive effect of musical expertise on language processing.


Subjects
Music, Pitch Perception, Humans, Electroencephalography/methods, Language, Linguistics, Acoustic Stimulation/methods
15.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: our brains must efficiently combine signals that share a common source and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the two age groups. This dissociation, between comparable information encoded in brain activation patterns across the two age groups and age-related increases in regional blood-oxygen-level-dependent responses, contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Subjects
Brain Mapping, Visual Perception, Humans, Aged, Bayes Theorem, Visual Perception/physiology, Brain/physiology, Attention/physiology, Acoustic Stimulation/methods, Auditory Perception/physiology, Photic Stimulation/methods, Magnetic Resonance Imaging
16.
Neuroreport ; 35(4): 269-276, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38305131

ABSTRACT

This study explored how the human brain perceives stickiness through tactile and auditory channels, especially when presented with congruent or incongruent intensity cues. In our behavioral and functional MRI (fMRI) experiments, we presented participants with adhesive tape stimuli at two different intensities. The congruent condition involved providing stickiness stimuli with matching intensity cues in both auditory and tactile channels, whereas the incongruent condition involved cues of different intensities. Behavioral results showed that participants were able to distinguish between the congruent and incongruent conditions with high accuracy. Through fMRI searchlight analysis, we tested which brain regions could distinguish between congruent and incongruent conditions, and as a result, we identified the superior temporal gyrus, known primarily for auditory processing. Interestingly, we did not observe any significant activation in regions associated with somatosensory or motor functions. This indicates that the brain dedicates more attention to auditory cues than to tactile cues, possibly due to the unfamiliarity of conveying the sensation of stickiness through sound. Our results could provide new perspectives on the complexities of multisensory integration, highlighting the subtle yet significant role of auditory processing in understanding tactile properties such as stickiness.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Humans, Acoustic Stimulation/methods, Auditory Perception/physiology, Brain/diagnostic imaging, Brain/physiology, Temporal Lobe, Visual Perception/physiology
17.
Infancy ; 29(3): 355-385, 2024.
Article in English | MEDLINE | ID: mdl-38421947

ABSTRACT

To efficiently recognize words, children learning an intonational language like English should avoid interpreting pitch-contour variation as signaling lexical contrast, despite the relevance of pitch at other levels of structure. Thus far, the developmental time-course with which English-learning children rule out pitch as a contrastive feature has been incompletely characterized. Prior studies have tested diverse lexical contrasts and have not tested beyond 30 months. To specify the developmental trajectory over a broader age range, we extended a prior study (Quam & Swingley, 2010), in which 30-month-olds and adults disregarded pitch changes, but attended to vowel changes, in newly learned words. Using the same phonological contrasts, we tested 3- to 5-year-olds, 24-month-olds, and 18-month-olds. The older two groups were tested using the language-guided-looking method. The oldest group attended to vowels but not pitch. Surprisingly, 24-month-olds ignored not just pitch but sometimes vowels as well, conflicting with prior findings of phonological constraint at 24 months. The youngest group was tested using the Switch habituation method, half with additional phonetic variability in training. Eighteen-month-olds learned both pitch-contrasted and vowel-contrasted words, whether or not additional variability was present. Thus, native-language phonological constraint was not evidenced prior to 30 months (Quam & Swingley, 2010). We contextualize our findings within other recent work in this area.


Subjects
Speech Perception, Adult, Child, Humans, Child, Preschool, Acoustic Stimulation/methods, Language, Learning, Language Development
18.
J Assoc Res Otolaryngol ; 25(2): 91-102, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38409555

ABSTRACT

At the 2004 Midwinter Meeting of the Association for Research in Otolaryngology, Glenis Long and her colleagues introduced a method for measuring distortion-product otoacoustic emissions (DPOAEs) using primary-tone stimuli whose instantaneous frequencies vary continuously with time. In contrast to standard OAE measurement methods, in which emissions are measured in the sinusoidal steady state using discrete tones of well-defined frequency, the swept-tone method sweeps across frequency, often at rates exceeding 1 oct/s. The resulting response waveforms are then analyzed using an appropriate filter (e.g., by least-squares fitting). Although introduced as a convenient way of studying DPOAE fine structure by separating the total OAE into distortion and reflection components, the swept-tone method has since been extended to stimulus-frequency emissions and has proved an efficient and valuable tool for probing cochlear mechanics. One day, a long time coming, swept tones may even find their way into the audiology clinic.
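As a rough illustration of the swept-tone idea, the primary tone can be synthesized as an exponential sweep whose instantaneous frequency rises at a constant rate in octaves per second. This is a minimal sketch, not the paper's implementation; the starting frequency, sweep rate, and sample rate below are illustrative assumptions:

```python
import numpy as np

def log_swept_tone(f_start=500.0, octaves=2.0, rate_oct_per_s=1.0, fs=48000):
    """Exponential (log-frequency) swept tone.

    Instantaneous frequency: f(t) = f_start * 2**(rate_oct_per_s * t),
    i.e. it rises at a constant rate in octaves per second. The phase
    is the integral of 2*pi*f(t), which has a closed form.
    """
    duration = octaves / rate_oct_per_s              # seconds to span the sweep
    t = np.arange(int(duration * fs)) / fs
    phase = (2 * np.pi * f_start
             * (2 ** (rate_oct_per_s * t) - 1)
             / (rate_oct_per_s * np.log(2)))
    return np.sin(phase)

# 2-octave sweep from 500 Hz at 1 oct/s: 2 s of audio
sweep = log_swept_tone()
```

A least-squares fit of slowly drifting sinusoids to the recorded response (the analysis "filter" mentioned above) would then track the emission along this known frequency trajectory.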


Subjects
Cochlea, Otoacoustic Emissions, Spontaneous, Female, Humans, Acoustic Stimulation/methods, Otoacoustic Emissions, Spontaneous/physiology, Cochlea/physiology
19.
J Neurosci ; 44(14)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38350998

ABSTRACT

Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions, guiding ongoing research, focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences, while brain responses were recorded using magnetoencephalography (MEG). Results unveiled a heightened magnitude of sustained brain responses in REG when compared to RND patterns. This manifested from three tones after the onset of the pattern repetition, even in the context of slower sequences characterized by extended pattern durations (2,500 ms). This observation underscores the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.
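The stimulus construction described above (50 ms tone-pips drawn from a frequency pool, arranged either as a repeating 10-frequency cycle or at random, presented back-to-back or separated by a 200 ms gap) can be sketched as follows. The frequency pool, 5 ms onset/offset ramps, and sample rate are assumptions for illustration, not values from the paper:

```python
import numpy as np

def tone_pip(freq, dur=0.05, fs=48000):
    """50 ms sinusoidal pip with 5 ms raised-cosine on/off ramps."""
    t = np.arange(int(dur * fs)) / fs
    pip = np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * fs)
    env = np.ones_like(pip)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return pip * env

def make_sequence(pool, n_pips=50, regular=True, gap=0.0, fs=48000, seed=0):
    """REG sequence (a repeating cycle of 10 frequencies) or RND sequence
    (frequencies drawn at random); gap is the silent interval between pips."""
    rng = np.random.default_rng(seed)
    cycle = rng.choice(pool, size=10, replace=False)
    if regular:
        freqs = np.tile(cycle, n_pips // 10 + 1)[:n_pips]
    else:
        freqs = rng.choice(pool, size=n_pips)
    silence = np.zeros(int(gap * fs))
    return np.concatenate(
        [np.concatenate([tone_pip(f, fs=fs), silence]) for f in freqs])

pool = np.geomspace(200, 2000, 20)                 # candidate frequencies
fast = make_sequence(pool, regular=True, gap=0.0)  # 20 Hz presentation rate
slow = make_sequence(pool, regular=True, gap=0.2)  # 4 Hz presentation rate
```

With a 10-pip cycle at 4 Hz, one full pattern repetition spans 2,500 ms, matching the pattern duration quoted in the abstract.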


Subjects
Auditory Cortex, Auditory Perception, Male, Female, Humans, Acoustic Stimulation/methods, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Brain/physiology, Magnetoencephalography, Auditory Cortex/physiology
20.
Codas ; 36(2): e20220261, 2024.
Article in English | MEDLINE | ID: mdl-38324806

ABSTRACT

PURPOSE: The inter-aural time difference (ITD) and inter-aural level difference (ILD) are important acoustic cues for horizontal localization and spatial release from masking. These cues are encoded based on inter-aural comparisons of tonotopically matched binaural inputs. Therefore, binaural coherence, or inter-aural spectro-temporal similarity, is a prerequisite for encoding ITD and ILD. The modulation depth of the envelope is an important envelope characteristic that helps in encoding the envelope ITD. However, an inter-aural difference in modulation depth can result in reduced binaural coherence and poor representation of binaural cues, as is the case with reverberation, noise, and compression in cochlear implants and hearing aids. This study investigated the effect of the inter-aural modulation depth difference on ITD thresholds for an amplitude-modulated noise in normal-hearing young adults. METHODS: An amplitude-modulated, high-pass filtered noise with varying modulation depth differences was presented sequentially through headphones. In one ear, the modulation depth was kept at 90%, and in the other ear it varied from 90% to 50%. The ITD thresholds for modulation frequencies of 8 Hz and 16 Hz were estimated as a function of the inter-aural modulation depth difference. RESULTS: The Friedman test revealed a statistically significant increase in the ITD threshold with increasing inter-aural modulation depth difference at both 8 Hz and 16 Hz. CONCLUSION: The results indicate that inter-aural differences in modulation depth negatively impact ITD perception for an amplitude-modulated, high-pass filtered noise.
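A minimal sketch of the stimulus class described above: high-pass filtered noise, sinusoidally amplitude-modulated, with one ear fixed at 90% modulation depth and the other reduced, plus a whole-waveform delay standing in for the envelope ITD. The cutoff frequency, duration, FFT-based filtering, and the use of a whole-waveform shift are simplifying assumptions for illustration:

```python
import numpy as np

def am_noise(depth, fs=48000, dur=0.5, fmod=8.0, cutoff=3000.0, seed=1):
    """High-pass noise, sinusoidally amplitude-modulated at fmod.

    depth is the modulation depth (0-1); the seed is fixed so both
    ears share the same underlying noise carrier.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(dur * fs))
    # crude high-pass filter: zero the FFT bins below the cutoff
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(noise), d=1 / fs)
    spec[freqs < cutoff] = 0
    carrier = np.fft.irfft(spec, n=len(noise))
    t = np.arange(len(noise)) / fs
    env = 1 + depth * np.sin(2 * np.pi * fmod * t)
    return carrier * env

def binaural_pair(itd_us=250.0, depth_ref=0.9, depth_var=0.5, fs=48000):
    """Left/right signals with an inter-aural modulation depth
    difference; the right channel is delayed to carry the ITD."""
    shift = int(round(itd_us * 1e-6 * fs))   # ITD rounded to whole samples
    left = am_noise(depth_ref, fs=fs)
    right = np.roll(am_noise(depth_var, fs=fs), shift)
    return left, right

left, right = binaural_pair()
```

Sweeping `depth_var` from 0.9 down to 0.5 while estimating the just-detectable `itd_us` adaptively would reproduce the manipulation reported in the study.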


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Young Adult, Humans, Acoustic Stimulation/methods, Noise